Two oracle inequalities for regularized boosting classifiers
Similar resources
Sparse oracle inequalities for variable selection via regularized quantization
We give oracle inequalities for procedures that combine quantization and variable selection via a weighted-Lasso k-means-type algorithm. The results are derived for a general family of weights, which can be tuned to scale the influence of the variables in different ways. Moreover, these theoretical guarantees are proved to adapt to the sparsity of the optimal codebooks, suggesting th...
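For orientation, a weighted-Lasso k-means criterion of the kind this abstract sketches can be written as below; the precise weights and penalty used in the paper may differ, so read this as an assumed form rather than the paper's objective. Given data X_1, ..., X_n in R^d, a codebook c = (c_1, ..., c_k) is chosen to minimize

    \hat{R}_\lambda(c) = \frac{1}{n} \sum_{i=1}^{n} \min_{1 \le j \le k} \| X_i - c_j \|_2^2 \;+\; \lambda \sum_{j=1}^{k} \sum_{l=1}^{d} w_l \, | c_{j,l} |.

If the penalty drives the l-th coordinate c_{j,l} to zero in every center, variable l plays no role in the quantization; this is how the weighted l1 term performs variable selection, and larger weights w_l push variable l out of the codebook sooner.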
On the Rate of Convergence of Regularized Boosting Classifiers
A regularized boosting method is introduced, for which regularization is obtained through a penalization function. It is shown through oracle inequalities that this method is model adaptive. The rate of convergence of the probability of misclassification is investigated. It is shown that for quite a large class of distributions, the probability of error converges to the Bayes risk at a rate fas...
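As a minimal sketch of penalization-based model selection for boosting, the snippet below scores every prefix of a boosted ensemble and keeps the round count minimizing empirical risk plus a penalty. scikit-learn's GradientBoostingClassifier stands in for the boosting procedure and pen(t) = c*t/n is an assumed penalty shape; neither is the paper's exact construction.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Toy data; in an application X, y come from the problem at hand.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    n = len(y)

    # Fit a generous number of boosting rounds once, then evaluate each prefix.
    model = GradientBoostingClassifier(n_estimators=200, max_depth=1, random_state=0)
    model.fit(X, y)

    c = 2.0  # penalty constant: an assumption that would need calibration
    best_t, best_score = None, np.inf
    for t, y_hat in enumerate(model.staged_predict(X), start=1):
        emp_risk = np.mean(y_hat != y)      # empirical misclassification rate
        penalized = emp_risk + c * t / n    # assumed penalty pen(t) = c * t / n
        if penalized < best_score:
            best_t, best_score = t, penalized

    print(f"selected {best_t} rounds, penalized empirical risk {best_score:.3f}")

The penalty trades the complexity of the ensemble (number of rounds) against fit, which is the mechanism an oracle inequality then certifies: the selected classifier performs nearly as well as the best penalized choice in hindsight.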
Boosting classifiers for drifting concepts
This paper proposes a boosting-like method for training a classifier ensemble from data streams. It adapts naturally to concept drift and allows the drift to be quantified in terms of its base learners. The algorithm is empirically shown to outperform learning algorithms that ignore concept drift. It performs no worse than advanced adaptive time window and example selection strategies that store all the...
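The sketch below shows the general stream-ensemble pattern behind such methods: one weak learner per batch, existing members re-weighted by their accuracy on the newest batch, and the weakest member pruned. The class and its parameters are hypothetical, and this is a simpler scheme than the boosting-like method the paper proposes.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    class DriftEnsemble:
        """Accuracy-weighted stream ensemble; assumes binary labels in {0, 1}."""

        def __init__(self, max_members=10):
            self.max_members = max_members
            self.members, self.weights = [], []

        def partial_fit(self, X, y):
            # Re-weight members by accuracy on the incoming batch; under
            # concept drift, outdated members lose influence automatically.
            self.weights = [max(m.score(X, y), 1e-3) for m in self.members]
            self.members.append(DecisionTreeClassifier(max_depth=3).fit(X, y))
            self.weights.append(1.0)
            if len(self.members) > self.max_members:  # prune the weakest member
                worst = int(np.argmin(self.weights))
                del self.members[worst], self.weights[worst]

        def predict(self, X):
            votes = np.array([m.predict(X) for m in self.members], dtype=float)
            w = np.array(self.weights)[:, None]
            return (np.sum(votes * w, axis=0) / w.sum() > 0.5).astype(int)

Watching how quickly member weights decay gives a crude drift signal, echoing the abstract's point that drift can be quantified through the base learners.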
Multiclass Boosting for Weak Classifiers
AdaBoost.M2 is a boosting algorithm designed for multiclass problems with weak base classifiers. It is constructed to minimize a very loose bound on the training error. We propose two alternative boosting algorithms that also minimize bounds on performance measures. These performance measures are not as strongly connected to the expected error as the training error is, but the derived bou...
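For reference, the quantity through which AdaBoost.M2 bounds the training error is the pseudo-loss over example/incorrect-label pairs; this is stated here from the standard Freund-Schapire formulation, not from this paper. With a distribution D_t over pairs (i, y) with y \neq y_i and a confidence-rated hypothesis h_t(x, y) \in [0, 1], the round-t pseudo-loss is

    \epsilon_t = \frac{1}{2} \sum_{(i, y) :\, y \neq y_i} D_t(i, y) \bigl( 1 - h_t(x_i, y_i) + h_t(x_i, y) \bigr),

and for k classes the training error is bounded by (k-1) \prod_t 2 \sqrt{\epsilon_t (1 - \epsilon_t)}, the loose bound the abstract refers to.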
Oracle Inequalities for Inverse Problems
We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in this set that has the smallest risk on the true f. Under general conditions, we show that this can be achieved by simple minimization of an unbiased risk estimator, provided the s...
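To make "minimization of an unbiased risk estimator" concrete, here is the standard construction in the direct sequence model, a simplification of the inverse-problem setting above. Observing y_i = \theta_i + \varepsilon \xi_i with standard Gaussian noise \xi_i, a linear estimator \hat{\theta}^{\lambda} with coordinates \lambda_i y_i admits

    U(y, \lambda) = \sum_i \left[ (1 - \lambda_i)^2 (y_i^2 - \varepsilon^2) + \lambda_i^2 \varepsilon^2 \right], \qquad \mathbb{E}\, U(y, \lambda) = \mathbb{E} \| \hat{\theta}^{\lambda} - \theta \|^2,

so choosing \hat{\lambda} = \arg\min_{\lambda \in \Lambda} U(y, \lambda) over the finite family \Lambda mimics the best estimator in the family up to the error terms an oracle inequality controls.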
Journal
Journal title: Statistics and Its Interface
Year: 2009
ISSN: 1938-7989,1938-7997
DOI: 10.4310/sii.2009.v2.n3.a2